[SPARK-19746][ML] Faster indexing for logistic aggregator #17078
Conversation
ping @dbtsai @yanboliang
dbtsai left a comment:
A couple of comments.
  val localFeaturesStd = bcFeaturesStd.value
- val localCoefficients = bcCoefficients.value
+ val localCoefficients = bcCoefficients.value.toArray
In the first version of LOR, we had the following code, which avoids the issue you pointed out:
private val weightsArray = weights match {
  case dv: DenseVector => dv.values
  case _ =>
    throw new IllegalArgumentException(
      s"weights only supports dense vector but got type ${weights.getClass}.")
}

I think the older approach will be more efficient, since toArray is only called once (you can add a case for sparse), and for sparse initial coefficients we will not convert from sparse to dense again and again.
This can be future work: with L1 applied, the coefficients can be very sparse, so we could compress the coefficients at each iteration and have a specialized implementation of UpdateInPlace.
The above check got us into trouble because, if we don't add @transient lazy val, we'll serialize the coefficients. The call to toArray is really just a small bit of pointer indirection, and while I agree it is not great to call it every time, the extra function call should pale in comparison to the O(numClasses * numFeatures) work we do in the method.
That said, I'm OK with either solution; I just wanted to point out the pros and cons of each. Let me know what you think, and thanks for reviewing!
Another thought: by tagging the class member as @transient lazy val, we at least make this hidden problem more explicit. I think using the transient approach makes it a bit less likely that someone will come along later and make a change that serializes the coefficients. I'll plan to update it here with the transient tag, then.
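As a hedged illustration of the trade-off under discussion (not the PR's code; it assumes a bcCoefficients: Broadcast[Vector] member as in the aggregator):

import org.apache.spark.broadcast.Broadcast
import org.apache.spark.ml.linalg.Vector

class AggregatorSketch(bcCoefficients: Broadcast[Vector]) extends Serializable {
  // A plain val is evaluated on the driver, so the unpacked array is captured
  // and serialized into every task closure:
  private val eagerArray: Array[Double] = bcCoefficients.value.toArray

  // A @transient lazy val is dropped at serialization time and recomputed from
  // the (cheaply shipped) broadcast handle on first access on each executor:
  @transient private lazy val lazyArray: Array[Double] = bcCoefficients.value.toArray
}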
My concern is that if the coefficients are sparse, we are not just doing the pointer indirection but creating a new dense array from a sparse vector. I know we always pass in a dense vector, so this will not be an issue now. That being said, in follow-up work, if we call compress on the coefficients, we may be able to broadcast a smaller object when L1 is applied, or in the initial iterations when most of the elements in the coefficients are zero.
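A hedged sketch of that idea (not code from this PR; sc stands in for the SparkContext). ml.linalg.Vector.compressed returns whichever of the dense or sparse encodings is smaller, so a mostly-zero coefficient vector under L1 would broadcast compactly:

// compressed picks the smaller of the dense/sparse representations,
// so with L1-induced sparsity the broadcast payload shrinks.
val bcCoefficients = sc.broadcast(coefficients.compressed)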
Jenkins test this please.
Test build #73503 has finished for PR 17078 at commit
  private var lossSum = 0.0
  private val gradientSumArray = Array.fill[Double](coefficientSize)(0.0D)
+ @transient private lazy val coefficientsArray = bcCoefficients.value match {
Can you add the type of coefficientsArray so people can clearly know that it's a primitive array?
Yeah, I'll update it.
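A hedged sketch of what the updated member might look like with the explicit type (assumed shape, not quoted from the final diff):

@transient private lazy val coefficientsArray: Array[Double] = bcCoefficients.value match {
  case dv: DenseVector => dv.values
  case v => throw new IllegalArgumentException(
    s"coefficients only supports dense vector but got type ${v.getClass}.")
}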
val bcFeaturesStd = spark.sparkContext.broadcast(Array(1.0))
val binaryAgg = new LogisticAggregator(bcCoefficientsBinary, bcFeaturesStd, 2,
  fitIntercept = true, multinomial = false)
val thrownBinary = withClue("binary logistic aggregator cannot handle sparse coefficients") {
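The hunk above is truncated mid-block; a hedged sketch of how such a ScalaTest assertion typically completes (instances is assumed test data, and the expected message text is an assumption):

val thrownBinary = withClue("binary logistic aggregator cannot handle sparse coefficients") {
  // intercept returns the thrown exception so its message can be checked
  intercept[IllegalArgumentException] {
    binaryAgg.add(instances.head)
  }
}
assert(thrownBinary.getMessage.contains("coefficients only supports dense"))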
I think we should handle sparse coefficients for a further performance improvement, but not in this PR.
Thanks. Merged into master.
Test build #73537 has finished for PR 17078 at commit
What changes were proposed in this pull request?
JIRA: SPARK-19746
The following code is inefficient:
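(The snippet referenced here did not survive extraction; the following is a hedged reconstruction of the hot loop's shape based on the description below. Names such as localFeaturesStd, margins, and numClasses are taken from the surrounding discussion.)

features.foreachActive { (index, value) =>
  if (localFeaturesStd(index) != 0.0 && value != 0.0) {
    val stdValue = value / localFeaturesStd(index)
    var j = 0
    while (j < numClasses) {
      // localCoefficients is a Vector here, so each element read goes
      // through Vector.apply, as described below.
      margins(j) += localCoefficients(index * numClasses + j) * stdValue
      j += 1
    }
  }
}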
localCoefficients(index * numClasses + j) calls Vector.apply, which creates a new Breeze vector and indexes that. Even if it is not that slow to create the object, we will generate a lot of extra garbage that may result in longer GC pauses. This is a hot inner loop, so we should optimize wherever possible.

How was this patch tested?
I don't think there's a great way to test this patch directly: it's purely performance related, so the existing unit tests should guarantee that we haven't made any unwanted changes. Empirically, I observed 10-40% speedups just running short local tests. I suspect the big differences will be seen when large data/coefficient sizes have to pause for GC more often. I welcome other ideas for testing.
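One hedged option for checking the performance claim locally (not part of the PR; all names and sizes are illustrative): a quick micro-benchmark comparing element reads through Vector.apply against reads from the unpacked primitive array.

import org.apache.spark.ml.linalg.Vectors

object IndexingBench {
  def main(args: Array[String]): Unit = {
    val n = 1 << 20
    val vec = Vectors.dense(Array.fill(n)(scala.util.Random.nextDouble()))
    val arr = vec.toArray  // unpacked once, as in this patch

    def time(label: String)(body: => Double): Unit = {
      val start = System.nanoTime()
      val result = body
      println(f"$label: ${(System.nanoTime() - start) / 1e6}%.1f ms (sum=$result%.3f)")
    }

    // Reads via the Vector interface: each apply() pays the indirection cost.
    time("Vector.apply") {
      var sum = 0.0; var i = 0
      while (i < n) { sum += vec(i); i += 1 }
      sum
    }
    // Reads from the primitive array: a plain bounds-checked load.
    time("Array index") {
      var sum = 0.0; var i = 0
      while (i < n) { sum += arr(i); i += 1 }
      sum
    }
  }
}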